383 research outputs found

    Metamodel variability analysis combining bootstrapping and validation techniques

    Research on metamodel-based optimization has received considerable and steadily increasing interest in recent years, and has found successful applications in solving computationally expensive problems. The joint use of computer simulation experiments and metamodels introduces a source of uncertainty that we refer to as metamodel variability. To analyze and quantify this variability, we apply bootstrapping to residuals derived as prediction errors computed from cross-validation. The proposed method can be used with different types of metamodels, especially when limited knowledge of the parameters' distributions is available or when only a limited computational budget is allowed. Our preliminary experiments, based on the robust version of the EOQ model, show encouraging results.
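As a rough illustration of the bootstrapped cross-validation residuals described above, the following Python sketch quantifies the variability of a simple polynomial metamodel. The toy simulator, the choice of a quadratic metamodel, and all numbers are hypothetical stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulation" data: y = x^2 plus noise (stand-in for an expensive simulator).
x = np.linspace(0, 1, 20)
y = x**2 + rng.normal(scale=0.05, size=x.size)

def fit_predict(x_tr, y_tr, x_te):
    """Quadratic polynomial metamodel (an illustrative choice; the method
    itself is agnostic to the metamodel type)."""
    coeffs = np.polyfit(x_tr, y_tr, deg=2)
    return np.polyval(coeffs, x_te)

# Leave-one-out cross-validation residuals (prediction errors).
residuals = np.array([
    y[i] - fit_predict(np.delete(x, i), np.delete(y, i), x[i])
    for i in range(x.size)
])

# Bootstrap the residuals to quantify metamodel variability at a new point.
x_new = 0.5
point_pred = fit_predict(x, y, x_new)
boot = point_pred + rng.choice(residuals, size=1000, replace=True)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"prediction {point_pred:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```

The interval width directly reflects how much the metamodel's cross-validated prediction errors vary, which is the quantity the abstract calls metamodel variability.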

    Dynamic Objectives Aggregation in Multi-objective Evolutionary Optimization

    Several approaches for solving multi-objective optimization problems entail some form of scalarization of the objectives. This paper studies different dynamic objective aggregation methods in the context of evolutionary algorithms. These methods are mainly based on weighted-sum aggregations and curvature variations. A comparative analysis is presented on the basis of a campaign of computational experiments on a set of benchmark problems from the literature.
    Keywords: Multi-objective optimization, Evolutionary algorithms, Aggregate objective functions
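A minimal sketch of a dynamic weighted-sum aggregation, assuming a toy two-objective problem and a (1+1) evolution strategy; the oscillating weight schedule and both objective functions are illustrative inventions, not the benchmark problems or the specific aggregation methods evaluated in the paper.

```python
import math
import random

random.seed(0)

# Two conflicting objectives on a scalar decision variable.
f1 = lambda v: v**2           # minimized at v = 0
f2 = lambda v: (v - 2)**2     # minimized at v = 2

def scalarize(v, w):
    """Weighted-sum aggregation: w*f1 + (1-w)*f2."""
    return w * f1(v) + (1 - w) * f2(v)

# (1+1) evolution strategy with a dynamically varying weight:
# the weight oscillates across generations, so the scalarized optimum
# sweeps back and forth along the trade-off between the two objectives.
x = random.uniform(-5, 5)
archive = []
for gen in range(2000):
    w = 0.5 * (1 + math.sin(2 * math.pi * gen / 200))  # dynamic aggregation
    child = x + random.gauss(0, 0.1)
    if scalarize(child, w) <= scalarize(x, w):
        x = child
    archive.append(x)

# Late-generation solutions should span the region between the two optima.
print(min(archive[1000:]), max(archive[1000:]))
```

The point of the dynamic schedule is that a single evolving population visits many trade-offs over time instead of converging to the single point a fixed weight vector would select.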

    DOAM for Evolutionary Portfolio Optimization: a computational study.

    In this work, the ability of Dynamic Objectives Aggregation Methods to solve the portfolio rebalancing problem is investigated through a computational study on a set of instances based on real data. The portfolio model considers a set of realistic constraints and entails the simultaneous optimization of the portfolio risk, the expected return, and the transaction cost.

    Collapse of superhydrophobicity on nanopillared surfaces

    The mechanism of the collapse of the superhydrophobic state is elucidated for submerged nanoscale textures forming a three-dimensional interconnected vapor domain. This key issue for the design of nanotextures poses significant simulation challenges, as it is characterized by diverse time and length scales. State-of-the-art atomistic rare-event simulations are applied to overcome the long time scales connected with the large free energy barriers. In such interconnected surface cavities, wetting starts with the formation of a liquid finger between two pillars. This symmetry breaking induces a gentler bending of the rest of the liquid-vapor interface, which triggers the wetting of the neighboring pillars. This collective mechanism, involving the wetting of several pillars at the same time, could not be captured by previous atomistic simulations using surface models comprising a small number of pillars (often just one). The atomistic results are interpreted in terms of a sharp-interface continuum model, which suggests that line tension, condensation, and other nanoscale phenomena play a minor role in the simulated conditions.

    Unraveling the Salvinia paradox: design principles for submerged superhydrophobicity

    The complex structure of Salvinia molesta is investigated via rare-event molecular dynamics simulations. Results show that hydrophilic/hydrophobic patterning, together with a re-entrant geometry, controls the free energy barriers for bubble nucleation and for the Cassie-Wenzel transition. This natural paradigm is translated into simple macroscopic design criteria for engineering robust superhydrophobicity in submerged applications.

    Evaluation of the quantiles and superquantiles of the makespan in interval valued activity networks

    This paper deals with the evaluation of quantile-based risk measures for the makespan in scheduling problems represented as temporal networks with uncertainties on the activity durations. More specifically, for each activity only the interval of its possible duration values is known in advance to both the scheduler and the risk analyst. Given a feasible schedule, we calculate the quantiles and superquantiles of the makespan, which are of interest as risk indicators in various applications. To this aim, we propose and test a set of novel algorithms that provide fast and accurate numerical estimations based on the calculation of theoretically proven lower and upper bounds. An extensive experimental campaign computationally shows the validity of the proposed methods and highlights their performance through comparison with state-of-the-art algorithms.
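One way to make the quantile/superquantile risk measures concrete is a Monte Carlo baseline on a tiny activity network. Note that the paper proposes bound-based algorithms rather than sampling, and that drawing durations uniformly from their intervals is an extra assumption here; the four-activity network is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical duration intervals for a 4-activity network:
# A and B run in parallel, then C and D follow in sequence.
intervals = {"A": (2, 4), "B": (1, 5), "C": (3, 3.5), "D": (0.5, 2)}

def sample_makespans(n):
    d = {k: rng.uniform(lo, hi, n) for k, (lo, hi) in intervals.items()}
    # Makespan = longest path: max of the parallel pair plus the serial tail.
    return np.maximum(d["A"], d["B"]) + d["C"] + d["D"]

makespans = sample_makespans(100_000)

alpha = 0.9
quantile = np.quantile(makespans, alpha)                  # quantile risk measure
superquantile = makespans[makespans >= quantile].mean()   # mean beyond the quantile

print(f"{alpha:.0%} quantile: {quantile:.2f}, superquantile: {superquantile:.2f}")
```

The superquantile (also known as CVaR) averages the tail beyond the quantile, so it is never smaller than the quantile itself, and on this network both must fall between the all-lower-bounds makespan (5.5) and the all-upper-bounds makespan (10.5).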

    An efficient decomposition approach for surgical planning

    This talk presents an efficient decomposition approach to surgical planning. Given a set of surgical waiting lists (one for each discipline) and an operating theater, the problem is to decide the room-to-discipline assignment for the next planning period (Master Surgical Schedule), and the surgical cases to be performed (Surgical Case Assignment), with the objective of optimizing a score related to the priority and current waiting time of the cases. While in general the MSS and SCA may be found concurrently by solving a complex integer programming problem, we propose an effective decomposition algorithm that does not require expensive or sophisticated computational resources, and is therefore suitable for implementation in any real-life setting. Our decomposition approach consists in first producing a number of subsets of surgical cases for each discipline (potential OR sessions) and then selecting a subset of them. The surgical cases in the selected potential sessions are then discarded, and only the structure of the MSS is retained. A detailed surgical case assignment is then devised by filling the MSS with cases from the waiting lists via an exact optimization model. The quality of the resulting plan is assessed by comparing it with the plan obtained by solving the exact integrated formulation for MSS and SCA. Nine different scenarios are considered, for various operating theater sizes and management policies. The results on instances concerning a medium-size hospital show that the decomposition method produces solutions comparable to those of the exact method in much smaller computation time.
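The two-phase decomposition can be caricatured in a few lines of Python. The greedy selection below stands in for the exact optimization models used in the talk, and the waiting lists, priority scores, room count, and session size are all made up for illustration.

```python
from itertools import combinations

# Hypothetical waiting lists: (case_id, priority_score) per discipline.
waiting = {
    "ortho":   [("o1", 9), ("o2", 7), ("o3", 4)],
    "general": [("g1", 8), ("g2", 6), ("g3", 5)],
    "uro":     [("u1", 3), ("u2", 2)],
}
n_rooms, cases_per_session = 2, 2

# Phase 1: enumerate potential OR sessions (case subsets) per discipline
# and keep each discipline's best-scoring session.
def best_session(cases):
    k = min(cases_per_session, len(cases))
    return max(combinations(cases, k), key=lambda s: sum(p for _, p in s))

potential = {d: best_session(cs) for d, cs in waiting.items()}

# Assign rooms to the highest-scoring disciplines -> this is the MSS.
mss = sorted(potential, key=lambda d: -sum(p for _, p in potential[d]))[:n_rooms]

# Phase 2: discard the provisional cases, keep only the MSS structure,
# and refill each room from the full waiting list by priority (greedy here;
# an exact model in the original approach).
sca = {d: sorted(waiting[d], key=lambda c: -c[1])[:cases_per_session] for d in mss}
print("MSS:", mss)
print("SCA:", sca)
```

The key design point survives even in this caricature: the potential sessions are used only to shape the room-to-discipline assignment, after which the case-level decision is re-solved from scratch against the full waiting lists.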

    Ab Antiquo: Neural Proto-language Reconstruction

    Historical linguists have identified regularities in the process of historical sound change. The comparative method uses those regularities to reconstruct proto-words based on observed forms in daughter languages. Can this process be efficiently automated? We address the task of proto-word reconstruction, in which the model is exposed to cognates in contemporary daughter languages and has to predict the proto-word in the ancestor language. We provide a novel dataset for this task, encompassing over 8,000 comparative entries, and show that neural sequence models outperform the conventional methods applied to this task so far. Error analysis reveals variability in the ability of the neural models to capture different phonological changes, correlating with the complexity of the changes. Analysis of the learned embeddings reveals that the models learn phonologically meaningful generalizations, corresponding to well-attested phonological shifts documented in historical linguistics.
    Comment: Accepted as a long paper at NAACL2

    Relaxation of a steep density gradient in a simple fluid: comparison between atomistic and continuum modeling

    We compare dynamical nonequilibrium molecular dynamics and continuum simulations of the relaxation dynamics of a fluid system characterized by a nonuniform density profile. Results match quite well as long as the length scale of the density nonuniformities is greater than the molecular scale (10 times the molecular size). In the presence of molecular-scale features, some of the continuum fields (e.g., density and momentum) are in good agreement with their atomistic counterparts, but are smoother. On the contrary, other fields, such as the temperature field, present very large differences with respect to the reference (atomistic) ones. This is due to the limited accuracy of some of the empirical relations used in continuum models: in the example considered, the equation of state of the fluid.